
fix(web): invoke pipeline config exception handling #1831

Open
wants to merge 1 commit into master

Conversation

kirangodishala (Contributor)

  • demonstrate current behavior of PipelineController.invokePipelineConfig as determined by WireMock responses from front50 and orca (InvokePipelineConfigTest) and when PipelineService.trigger throws an exception (PipelineControllerTest)

  • let exceptions during PipelineController.invokePipelineConfig bubble up so gate's http response code more closely reflects what happened

  • change PipelineController to use constructor autowiring to prepare for changes to the constructor logic

  • include information from downstream services in error responses from PipelineController.invokePipelineConfig by handling RetrofitErrors with SpinnakerRetrofitErrorHandler

  • As part of this, PipelineController.invokePipelineConfig no longer logs its own message for RetrofitErrors. There's some loss of information with this, as the initiator of the downstream communication is no longer clear. A subsequent commit restores this.

  • chain Spinnaker*Exceptions in PipelineController.invokePipelineConfig so it's clear which operation is failing. This improves both logging and gate's http response.

  • As part of this, remove the no-op catch and throw for NotFoundException. With no other more general catch block, this code isn't necessary.

  • introduce a new configuration property, services.front50.applicationRefreshInitialDelayMs, which provides an initial delay in milliseconds for the thread that refreshes the applications cache in ApplicationService. It's primarily to facilitate testing, but it seems reasonable someone might want to use it in production to keep things quiet at startup.
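For the new property, a hedged sketch of how it might be set in gate's configuration; the property name comes from this PR, but the surrounding file layout and the 10-second value are illustrative assumptions only:

```yaml
# Illustrative gate config fragment (value is an assumption, not a default)
services:
  front50:
    # wait 10s before the first refresh of the applications cache
    applicationRefreshInitialDelayMs: 10000
```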
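The exception-chaining bullet above can be sketched in plain Java. This is not Spinnaker's actual code; the class and method names below are hypothetical stand-ins that only illustrate the pattern: wrap the downstream failure in a new exception that names the failing operation, so the log and HTTP response identify the operation while getCause() still carries the downstream detail.

```java
// Hedged sketch of exception chaining, with hypothetical stand-in names.
public class InvokeSketch {
  // Stand-in for a Spinnaker*Exception raised by a downstream service call
  static class DownstreamException extends RuntimeException {
    DownstreamException(String msg) { super(msg); }
  }

  // Stand-in for PipelineService.trigger, which here always fails
  static String trigger(String application, String pipelineName) {
    throw new DownstreamException("front50: pipeline config not found");
  }

  // Stand-in for PipelineController.invokePipelineConfig
  static String invokePipelineConfig(String application, String pipelineName) {
    try {
      return trigger(application, pipelineName);
    } catch (DownstreamException e) {
      // Chain instead of swallowing: the new message names the operation,
      // and the original exception survives as the cause.
      throw new RuntimeException(
          "Unable to trigger pipeline '" + pipelineName + "' in '" + application + "'", e);
    }
  }

  public static void main(String[] args) {
    try {
      invokePipelineConfig("myapp", "deploy");
    } catch (RuntimeException e) {
      System.out.println(e.getMessage());
      System.out.println("cause: " + e.getCause().getMessage());
    }
  }
}
```

Letting this chained exception bubble up (rather than catching it and returning a blanket 400) is what lets gate's response code reflect what actually happened downstream.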

dbyron-sf (Contributor)

The key is that gate used to respond with 400 even when the request wasn't bad. With this PR, the response better reflects what's actually going on, so callers can better decide when to retry.
